Results 1 - 20 of 52
1.
Psychol Methods ; 2023 Dec 21.
Article in English | MEDLINE | ID: mdl-38127572

ABSTRACT

Network psychometrics leverages pairwise Markov random fields to depict conditional dependencies among a set of psychological variables as undirected edge-weighted graphs. Researchers often intend to compare such psychometric networks across subpopulations, and recent methodological advances provide invariance tests of differences in subpopulation networks. What remains missing, though, is an analogue to an effect size measure that quantifies differences in psychometric networks. We address this gap by complementing recent advances for investigating whether psychometric networks differ with an intuitive similarity measure quantifying the extent to which networks differ. To this end, we build on graph-theoretic approaches and propose a similarity measure based on the Frobenius norm of differences in psychometric networks' weighted adjacency matrices. To assess this measure's utility for quantifying differences between psychometric networks, we study how it captures differences in subpopulation network models implied by both latent variable models and Gaussian graphical models. We show that a wide array of network differences translates intuitively into the proposed measure, while the same does not hold true for customary correlation-based comparisons. In a simulation study on finite-sample behavior, we show that the proposed measure yields trustworthy results when population networks differ and sample sizes are sufficiently large, but fails to identify exact similarity when population networks are the same. From these results, we derive a strong recommendation to only use the measure as a complement to a significance test for network similarity. We illustrate potential insights from quantifying psychometric network similarities through cross-country comparisons of human values networks. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
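
A minimal sketch of the kind of quantity proposed here, the Frobenius norm of the difference between two weighted adjacency matrices, appears below; the function name, toy matrices, and absence of any normalization are illustrative assumptions rather than the paper's exact definition.

```python
import numpy as np

def network_distance(W1, W2):
    """Frobenius norm of the difference between two weighted adjacency matrices.

    W1, W2: symmetric (p x p) edge-weight matrices from two subpopulation networks.
    Larger values indicate more dissimilar networks; 0 means identical networks.
    """
    return np.linalg.norm(W1 - W2, ord="fro")

# Toy example: two 3-node networks that differ in a single edge weight.
W_a = np.array([[0.0, 0.3, 0.0],
                [0.3, 0.0, 0.2],
                [0.0, 0.2, 0.0]])
W_b = W_a.copy()
W_b[0, 1] = W_b[1, 0] = 0.1   # weaken one edge

print(network_distance(W_a, W_b))  # ~0.283 (= sqrt(2 * 0.2**2))
```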

2.
Neuroimage ; 275: 120160, 2023 07 15.
Article in English | MEDLINE | ID: mdl-37169117

ABSTRACT

Graph-theoretic metrics derived from neuroimaging data have been heralded as powerful tools for uncovering neural mechanisms of psychological traits, psychiatric disorders, and neurodegenerative diseases. In N = 8,185 human structural connectomes from UK Biobank, we examined the extent to which 11 commonly used global graph-theoretic metrics index distinct versus overlapping information with respect to interindividual differences in brain organization. Using unthresholded, FA-weighted networks, we found that all metrics other than Participation Coefficient were highly intercorrelated, both with each other (mean |r| = 0.788) and with a topologically naïve summary index of brain structure (mean edge weight; mean |r| = 0.873). In a series of sensitivity analyses, we found that overlap between metrics is influenced by the sparseness of the network and the magnitude of variation in edge weights. Simulation analyses representing a range of population network structures indicated that individual differences in global graph metrics may be intrinsically difficult to separate from mean edge weight. In particular, Closeness, Characteristic Path Length, Global Efficiency, Clustering Coefficient, and Small Worldness were nearly perfectly collinear with one another (mean |r| = 0.939) and with mean edge weight (mean |r| = 0.952) across all observed and simulated conditions. Global graph-theoretic measures are valuable for their ability to distill a high-dimensional system of neural connections into summary indices of brain organization, but they may be of more limited utility when the goal is to index separable components of interindividual variation in specific properties of the human structural connectome.
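
As a rough illustration of the comparison described above, the sketch below computes mean edge weight and one weighted graph metric for simulated connectomes and correlates them across subjects; the random matrices and the use of networkx's weighted clustering coefficient are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_subjects, n_nodes = 50, 20

mean_weight, clustering = [], []
for _ in range(n_subjects):
    # Symmetric, nonnegative "FA-like" weight matrix with a zero diagonal.
    A = rng.uniform(0.1, 0.9, size=(n_nodes, n_nodes))
    A = np.triu(A, k=1)
    A = A + A.T
    G = nx.from_numpy_array(A)
    mean_weight.append(A[np.triu_indices(n_nodes, k=1)].mean())
    clustering.append(nx.average_clustering(G, weight="weight"))

# Correlation between the naive summary (mean edge weight) and a graph metric.
print(np.corrcoef(mean_weight, clustering)[0, 1])
```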


Subject(s)
Connectome, Mental Disorders, Humans, Magnetic Resonance Imaging/methods, Brain/diagnostic imaging, Connectome/methods, Phenotype
3.
Multivariate Behav Res ; 58(6): 1134-1159, 2023.
Article in English | MEDLINE | ID: mdl-37039444

ABSTRACT

The use of modern missing data techniques has become more prevalent with their increasing accessibility in statistical software. These techniques focus on handling data that are missing at random (MAR). Although all MAR mechanisms are routinely treated as the same, they are not equal. The impact of missing data on the efficiency of parameter estimates can differ for different MAR variations, even when the amount of missing data is held constant; yet, in current practice, only the rate of missing data is reported. The impact of MAR on the loss of efficiency can instead be more directly measured by the fraction of missing information (FMI). In this article, we explore this impact using FMIs in regression models with one and two predictors. With the help of a Shiny application, we demonstrate that efficiency loss due to missing data can be highly complex and is not always intuitive. We recommend substantive researchers who work with missing data report estimates of FMIs in addition to the rate of missingness. We also encourage methodologists to examine FMIs when designing simulation studies with missing data, and to explore the behavior of efficiency loss under MAR using FMIs in more complex models.
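
One common way to estimate the fraction of missing information for a parameter is from multiple-imputation output via Rubin's rules; the sketch below uses that approximation with made-up numbers and is not tied to the article's specific models.

```python
import numpy as np

def fmi(point_estimates, variances):
    """Approximate fraction of missing information for one parameter
    from m imputed-data analyses (Rubin's rules).

    point_estimates: the parameter estimate from each imputed data set
    variances: the squared standard error of the estimate in each imputed data set
    """
    q = np.asarray(point_estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    w_bar = u.mean()                      # within-imputation variance
    b = q.var(ddof=1)                     # between-imputation variance
    t = w_bar + (1 + 1 / m) * b           # total variance
    return (1 + 1 / m) * b / t            # commonly used FMI approximation

# Example: a regression slope estimated in m = 5 imputed data sets.
print(fmi([0.42, 0.45, 0.40, 0.47, 0.44], [0.010, 0.011, 0.009, 0.012, 0.010]))
```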


Subject(s)
Statistical Models, Software, Statistical Data Interpretation, Computer Simulation
4.
Psychol Methods ; 2022 Oct 06.
Article in English | MEDLINE | ID: mdl-36201820

ABSTRACT

Studies of interaction effects are of great interest because they identify crucial interplay between predictors in explaining outcomes. Previous work has considered several potential sources of statistical bias and substantive misinterpretation in the study of interactions, but less attention has been devoted to the role of the outcome variable in such research. Here, we consider bias and false discovery associated with estimates of interaction parameters as a function of the distributional and metric properties of the outcome variable. We begin by illustrating that, for a variety of noncontinuously distributed outcomes (i.e., binary and count outcomes), attempts to use the linear model for recovery lead to catastrophic levels of bias and false discovery. Next, focusing on transformations of normally distributed variables (i.e., censoring and noninterval scaling), we show that linear models again produce spurious interaction effects. We provide explanations offering geometric and algebraic intuition as to why interactions are a challenge for these incorrectly specified models. In light of these findings, we make two specific recommendations. First, a careful consideration of the outcome's distributional properties should be a standard component of interaction studies. Second, researchers should approach research focusing on interactions with heightened levels of scrutiny. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
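
The contrast described above can be made concrete with a small simulation: the same interaction term is fit to a binary outcome with a misspecified linear model and with a logistic model. The data-generating values and the use of statsmodels are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
x1, x2 = rng.normal(size=n), rng.normal(size=n)

# True model: main effects only, no interaction, on the logit scale.
p = 1 / (1 + np.exp(-(-1.0 + 0.8 * x1 + 0.8 * x2)))
df = pd.DataFrame({"y": rng.binomial(1, p), "x1": x1, "x2": x2})

# Misspecified linear model: may report a spurious x1:x2 "effect".
lin = smf.ols("y ~ x1 * x2", data=df).fit()
# Correctly specified logistic model: interaction estimate should be near zero.
log = smf.logit("y ~ x1 * x2", data=df).fit(disp=0)

print("linear:  ", lin.params["x1:x2"], lin.pvalues["x1:x2"])
print("logistic:", log.params["x1:x2"], log.pvalues["x1:x2"])
```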

5.
Psychol Methods ; 2022 Jul 25.
Article in English | MEDLINE | ID: mdl-35878074

ABSTRACT

In modern test theory, response variables are a function of a common latent variable that represents the measured attribute, and error variables that are unique to the response variables. While considerable thought goes into the interpretation of latent variables in these models (e.g., validity research), the interpretation of error variables is typically left implicit (e.g., describing error variables as residuals). Yet, many psychometric assumptions are essentially assumptions about error, and thus being able to reason about psychometric models requires the ability to reason about errors. We propose a causal theory of error as a framework that enables researchers to reason about errors in terms of the data-generating mechanism. In this framework, the error variable reflects myriad causes that are specific to an item and, together with the latent variable, determine the scores on that item. We distinguish two types of item-specific causes: characteristic variables that differ between people (e.g., familiarity with words used in the item), and circumstance variables that vary over occasions in which the item is administered (e.g., a distracting noise). We show that different assumptions about these unique causes (a) imply different psychometric models; (b) have different implications for the chance experiment that makes these models probabilistic models; and (c) have different consequences for item bias, local homogeneity, reliability coefficient α, and the test-retest correlation. The ability to reason about the causes that produce error variance puts researchers in a better position to motivate modeling choices. (PsycInfo Database Record (c) 2022 APA, all rights reserved).

7.
R Soc Open Sci ; 9(4): 200048, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35425627

ABSTRACT

What research practices should be considered acceptable? Historically, scientists have set the standards for what constitutes acceptable research practices. However, there is value in considering non-scientists' perspectives, including research participants'. A total of 1,873 participants from MTurk and university subject pools were surveyed after their participation in one of eight minimal-risk studies. We asked participants how they would feel if (mostly) common research practices were applied to their data: p-hacking/cherry-picking results, selective reporting of studies, Hypothesizing After Results are Known (HARKing), committing fraud, conducting direct replications, sharing data, sharing methods, and open access publishing. An overwhelming majority of psychology research participants think questionable research practices (e.g., p-hacking, HARKing) are unacceptable (68.3-81.3%) and support practices to increase transparency and replicability (71.4-80.1%). A surprising number of participants expressed positive or neutral views toward scientific fraud (18.7%), raising concerns about data quality. We grapple with this concern and interpret our results in light of the limitations of our study. Despite the ambiguity in our results, we argue that there is evidence (from our study and others') that researchers may be violating participants' expectations and should be transparent with participants about how their data will be used.

9.
Br J Math Stat Psychol ; 75(1): 158-181, 2022 02.
Article in English | MEDLINE | ID: mdl-34632565

ABSTRACT

Random effects in longitudinal multilevel models represent individuals' deviations from population means and are indicators of individual differences. Researchers are often interested in examining how these random effects predict outcome variables that vary across individuals. This can be done via a two-step approach in which empirical Bayes (EB) estimates of the random effects are extracted and then treated as observed predictor variables in follow-up regression analyses. This approach ignores the unreliability of EB estimates, leading to underestimation of regression coefficients. As such, previous studies have recommended a multilevel structural equation modeling (ML-SEM) approach that treats random effects as latent variables. The current study uses simulation and empirical data to show that a bias-variance tradeoff exists when selecting between the two approaches. ML-SEM produces generally unbiased regression coefficient estimates but also larger standard errors, which can lead to lower power than the two-step approach. Implications of the results for model selection and alternative solutions are discussed.
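
A bare-bones sketch of the two-step approach, using statsmodels for the multilevel model, is given below; the variable names, simulated data, and effect sizes are assumptions, and the ML-SEM alternative would require a dedicated SEM package.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_people, n_waves = 100, 6

# Longitudinal data in which each person has their own growth slope.
person = np.repeat(np.arange(n_people), n_waves)
time = np.tile(np.arange(n_waves), n_people)
true_slope = rng.normal(0.5, 0.3, n_people)
y = 2.0 + true_slope[person] * time + rng.normal(0, 1, n_people * n_waves)
df = pd.DataFrame({"id": person, "time": time, "y": y})

# Step 1: random-intercept, random-slope model; extract empirical Bayes (EB) estimates.
mlm = sm.MixedLM.from_formula("y ~ time", groups="id", re_formula="~time", data=df).fit()
eb_slope = np.array([mlm.random_effects[i].iloc[1] for i in range(n_people)])  # slope deviations

# Step 2: treat the EB slopes as an observed predictor of a person-level outcome.
outcome = 1.0 * true_slope + rng.normal(0, 0.5, n_people)  # true coefficient = 1
step2 = sm.OLS(outcome, sm.add_constant(eb_slope)).fit()
print(step2.params)  # coefficient is typically attenuated because EB estimates are shrunken
```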


Subject(s)
Bayes Theorem, Bias, Humans, Latent Class Analysis, Multilevel Analysis, Regression Analysis
10.
Psychol Methods ; 2021 Dec 20.
Article in English | MEDLINE | ID: mdl-34928677

ABSTRACT

Ordinal data are extremely common in psychological research, with variables often assessed using Likert-type scales that take on only a few values. At the same time, researchers are increasingly fitting network models to ordinal item-level data. Yet very little work has evaluated how network estimation techniques perform when data are ordinal. We use a Monte Carlo simulation to evaluate and compare the performance of three estimation methods applied to either Pearson or polychoric correlations: extended Bayesian information criterion graphical lasso with regularized edge estimates ("EBIC"), Bayesian information criterion model selection with partial correlation edge estimates ("BIC"), and multiple regression with p-value-based edge selection and partial correlation edge estimates ("MR"). We vary the number and distribution of thresholds, distribution of the underlying continuous data, sample size, model size, and network density, and we evaluate results in terms of model structure (sensitivity and false positive rate) and edge weight bias. Our results show that the effect of treating the data as ordinal versus continuous depends primarily on the number of levels in the data, and that estimation performance was affected by the sample size, the shape of the underlying distribution, and the symmetry of underlying thresholds. Furthermore, which estimation method is recommended depends on the research goals: MR methods tended to maximize sensitivity of edge detection, BIC approaches minimized false positives, and either one of these produced accurate edge weight estimates in sufficiently large samples. We identify some particularly difficult combinations of conditions for which no method produces stable results. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
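
The data side of this design can be sketched by generating correlated continuous responses, discretizing them into a few ordered levels, and comparing the correlations recovered under the two treatments; the thresholds below are an illustrative assumption, and polychoric estimation itself would need a dedicated routine.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
true_r = 0.4

# Bivariate normal "latent" responses with the target correlation.
cov = np.array([[1.0, true_r], [true_r, 1.0]])
latent = rng.multivariate_normal([0, 0], cov, size=n)

# Discretize into 4 ordered categories using symmetric thresholds.
thresholds = [-1.0, 0.0, 1.0]
ordinal = np.digitize(latent, thresholds)

r_continuous = np.corrcoef(latent[:, 0], latent[:, 1])[0, 1]
r_ordinal = np.corrcoef(ordinal[:, 0], ordinal[:, 1])[0, 1]
print(r_continuous, r_ordinal)   # Pearson r on ordinal scores is typically attenuated
```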

11.
Psychophysiology ; 58(6): e13793, 2021 06.
Article in English | MEDLINE | ID: mdl-33782996

ABSTRACT

Event-related potentials (ERPs) can be very noisy, and yet, there is no widely accepted metric of ERP data quality. Here, we propose a universal measure of data quality for ERP research-the standardized measurement error (SME)-which is a special case of the standard error of measurement. Whereas some existing metrics provide a generic quantification of the noise level, the SME quantifies the data quality (precision) for the specific amplitude or latency value being measured in a given study (e.g., the peak latency of the P3 wave). It can be applied to virtually any value that is derived from averaged ERP waveforms, making it a universal measure of data quality. In addition, the SME quantifies the data quality for each individual participant, making it possible to identify participants with low-quality data and "bad" channels. When appropriately aggregated across individuals, SME values can be used to quantify the combined impact of the single-trial EEG noise and the number of trials being averaged together on the effect size and statistical power in a given experiment. If SME values were regularly included in published articles, researchers could identify the recording and analysis procedures that produce the highest data quality, which could ultimately lead to increased effect sizes and greater replicability across the field.
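
For the simplest case, a mean-amplitude score, the SME can be estimated analytically as the standard error of the single-trial values for one participant; the sketch below uses simulated single-trial scores rather than real EEG, and scores such as peak latency would require a bootstrap version instead.

```python
import numpy as np

def sme_mean_amplitude(single_trial_scores):
    """Standardized measurement error for a mean-amplitude score:
    the standard error of the mean across trials for one participant/condition."""
    x = np.asarray(single_trial_scores, dtype=float)
    return x.std(ddof=1) / np.sqrt(len(x))

rng = np.random.default_rng(4)
# 40 simulated single-trial mean amplitudes (in µV) for one participant.
trials = rng.normal(loc=3.0, scale=8.0, size=40)
print(sme_mean_amplitude(trials))   # smaller = higher data quality (more precise score)
```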


Subject(s)
Data Accuracy, Electroencephalography, Evoked Potentials, Signal-to-Noise Ratio, Humans, Statistical Models, Computer-Assisted Signal Processing
12.
Multivariate Behav Res ; 56(2): 314-328, 2021.
Article in English | MEDLINE | ID: mdl-30463456

ABSTRACT

Steinley, Hoffman, Brusco, and Sher (2017) proposed a new method for evaluating the performance of psychological network models: fixed-margin sampling. The authors investigated LASSO regularized Ising models (eLasso) by generating random datasets with the same margins as the original binary dataset, and concluded that many estimated eLasso parameters are not distinguishable from those that would be expected if the data were generated by chance. We argue that fixed-margin sampling cannot be used for this purpose, as it generates data under a particular null-hypothesis: a unidimensional factor model with interchangeable indicators (i.e., the Rasch model). We show this by discussing relevant psychometric literature and by performing simulation studies. Results indicate that while eLasso correctly estimated network models and estimated almost no edges due to chance, fixed-margin sampling performed poorly in classifying true effects as "interesting" (Steinley et al. 2017, p. 1004). Further simulation studies indicate that fixed-margin sampling offers a powerful method for highlighting local misfit from the Rasch model, but performs only moderately in identifying global departures from the Rasch model. We conclude that fixed-margin sampling is not up to the task of assessing if results from estimated Ising models or other multivariate psychometric models are due to chance.
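
Fixed-margin sampling of a binary data matrix can be sketched with the generic checkerboard-swap algorithm, which shuffles a 0/1 matrix while preserving every row and column sum; this simple implementation is an assumption for illustration and is not the specific sampler used in the papers discussed.

```python
import numpy as np

def fixed_margin_shuffle(X, n_swaps=10000, seed=0):
    """Return a copy of binary matrix X with identical row and column margins,
    randomized by repeated 2x2 'checkerboard' swaps."""
    rng = np.random.default_rng(seed)
    X = X.copy()
    n_rows, n_cols = X.shape
    for _ in range(n_swaps):
        r = rng.choice(n_rows, size=2, replace=False)
        c = rng.choice(n_cols, size=2, replace=False)
        sub = X[np.ix_(r, c)]
        # Swap only if the 2x2 submatrix is a checkerboard pattern.
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            X[np.ix_(r, c)] = 1 - sub
    return X

X = (np.random.default_rng(5).random((200, 10)) < 0.3).astype(int)
X_null = fixed_margin_shuffle(X)
print((X.sum(axis=0) == X_null.sum(axis=0)).all())  # identical column sums
print((X.sum(axis=1) == X_null.sum(axis=1)).all())  # identical row sums
```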


Subject(s)
Statistical Models, Research Design, Computer Simulation, Probability, Psychometrics
13.
Multivariate Behav Res ; 56(2): 175-198, 2021.
Article in English | MEDLINE | ID: mdl-31617420

ABSTRACT

Networks are gaining popularity as an alternative to latent variable models for representing psychological constructs. Whereas latent variable approaches introduce unobserved common causes to explain the relations among observed variables, network approaches posit direct causal relations between observed variables. While these approaches lead to radically different understandings of the psychological constructs of interest, recent articles have established mathematical equivalences that hold between network models and latent variable models. We argue that the fact that for any model from one class there is an equivalent model from the other class does not mean that both models are equally plausible accounts of the data-generating mechanism. In many cases the constraints that are meaningful in one framework translate to constraints in the equivalent model that lack a clear interpretation in the other framework. Finally, we discuss three diverging predictions for the relation between zero-order correlations and partial correlations implied by sparse network models and unidimensional factor models. We propose a test procedure that compares the likelihoods of these models in light of these diverging implications. We use an empirical example to illustrate our argument.


Subject(s)
Statistical Models, Theoretical Models
14.
Multivariate Behav Res ; 56(2): 288-302, 2021.
Article in English | MEDLINE | ID: mdl-31672065

ABSTRACT

Network models are gaining popularity as a way to estimate direct effects among psychological variables and investigate the structure of constructs. A key feature of network estimation is determining which edges are likely to be non-zero. In psychology, this is commonly achieved through the graphical lasso regularization method that estimates a precision matrix of Gaussian variables using an ℓ1-penalty to push small values to zero. A tuning parameter, λ, controls the sparsity of the network. There are many methods to select λ, which can lead to vastly different graphs. The most common approach in psychological network applications is to minimize the extended Bayesian information criterion, but the consistency of this method for model selection has primarily been examined in high dimensional settings (i.e., n < p) that are uncommon in psychology. Further, there is some evidence that alternative selection methods may have superior performance. Here, using simulation, we compare four different methods for selecting λ, including the stability approach to regularization selection (StARS), K-fold cross-validation, the rotation information criterion (RIC), and the extended Bayesian information criterion (EBIC). Our results demonstrate that penalty parameter selection should be made based on data characteristics and the inferential goal (e.g., to increase sensitivity versus to avoid false positives). We end with recommendations for selecting the penalty parameter when using the graphical lasso.
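
Of the selection methods compared here, K-fold cross-validation is the one readily available in scikit-learn; the sketch below is a minimal example on simulated data, and EBIC, StARS, and RIC would require other packages or custom code.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(6)
n, p = 300, 10

# Simulate data from a sparse precision (inverse covariance) matrix.
precision = np.eye(p)
precision[0, 1] = precision[1, 0] = 0.4
precision[2, 3] = precision[3, 2] = -0.3
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(precision), size=n)

# K-fold cross-validation chooses the penalty parameter (alpha here = lambda).
model = GraphicalLassoCV(cv=5).fit(X)

# Convert the estimated precision matrix to partial correlations (the edge weights).
partial_corr = -model.precision_ / np.sqrt(
    np.outer(np.diag(model.precision_), np.diag(model.precision_))
)
np.fill_diagonal(partial_corr, 0.0)

print("selected penalty:", model.alpha_)
print("nonzero edges:", int((np.abs(partial_corr) > 1e-6).sum() / 2))
```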


Subject(s)
Bayes Theorem, Computer Simulation, Normal Distribution
15.
Pers Soc Psychol Bull ; 47(11): 1535-1549, 2021 11.
Article in English | MEDLINE | ID: mdl-33342369

ABSTRACT

Participants in experience sampling method (ESM) studies are "beeped" several times per day to report on their momentary experiences-but participants do not always answer the beep. Knowing whether there are systematic predictors of missing a report is critical for understanding the extent to which missing data threatens the validity of inferences from ESM studies. Here, 228 university students completed up to four ESM reports per day while wearing the Electronically Activated Recorder (EAR)-an unobtrusive audio recording device-for a week. These audio recordings provided an alternative source of information about what participants were doing when they missed or completed reports (3,678 observations). We predicted missing ESM reports from 46 variables coded from the EAR recordings, and found very little evidence that missing an ESM report was correlated with constructs typically of interest to ESM researchers. These findings provide reassuring evidence for the validity of ESM research among relatively healthy university student samples.


Subject(s)
Ecological Momentary Assessment, Universities, Humans, Students
16.
Infancy ; 25(4): 393-419, 2020 07.
Article in English | MEDLINE | ID: mdl-32744759

ABSTRACT

As in many areas of science, infant research suffers from low power. The problem is further compounded in infant research because of the difficulty in recruiting and testing large numbers of infant participants. Researchers have been searching for a solution and, as illustrated by this special section, have been focused on getting the most out of infant data. We illustrate one solution by showing how we can increase power in visual preference tasks by increasing the amount of data obtained from each infant. We discuss issues of power and present work examining how, under some circumstances, power is increased by increasing the precision of measurement. We report the results of a series of simulations based on a sample of visual preference task data collected from three infant laboratories showing how more powerful research designs can be achieved by including more trials per infant. Implications for infant procedures in general are discussed.
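
The core idea, that additional trials per infant can buy power, can be sketched with a small Monte Carlo simulation; all effect sizes and variance components below are made-up illustration values, not estimates from the three laboratories.

```python
import numpy as np
from scipy import stats

def power_sim(n_infants, n_trials, mean_pref=0.55, sd_between=0.08,
              sd_within=0.15, n_sims=2000, seed=7):
    """Power to detect a mean preference above 0.5 in a one-sample t-test,
    when each infant's score is the mean of n_trials noisy trials."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        true_pref = rng.normal(mean_pref, sd_between, n_infants)
        trials = rng.normal(true_pref[:, None], sd_within, (n_infants, n_trials))
        scores = trials.mean(axis=1)
        t, p = stats.ttest_1samp(scores, 0.5)
        hits += (p < .05) and (t > 0)
    return hits / n_sims

for k in (4, 8, 16):
    print(k, "trials per infant:", power_sim(n_infants=24, n_trials=k))
```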


Subject(s)
Biomedical Research/methods, Child Development, Infant Behavior, Research Design, Statistical Data Interpretation, Datasets as Topic, Humans, Infant, Patient Selection, Sample Size, Visual Perception
17.
PLoS One ; 15(7): e0236893, 2020.
Article in English | MEDLINE | ID: mdl-32730328

ABSTRACT

We created a facet atlas that maps the interrelations between facet scales from 13 hierarchical personality inventories to provide a practically useful, transtheoretical description of lower-level personality traits. We generated this atlas by estimating a series of network models that visualize the correlations among 268 facet scales administered to the Eugene-Springfield Community Sample (Ns = 571-948). As expected, most facets contained a blend of content from multiple Big Five domains and were part of multiple Big Five networks. We identified core and peripheral facets for each Big Five domain. Results from this study resolve some inconsistencies in facet placement across instruments and highlight the complexity of personality structure relative to the constraints of traditional hierarchical models that impose simple structure. This facet atlas (also available as an online point-and-click app at tedschwaba.shinyapps.io/appdata/) provides a guide for researchers who wish to measure a domain with a limited set of facets as well as information about the core and periphery of each personality domain. To illustrate the value of a facet atlas in applied and theoretical settings, we examined the network structure of scales measuring impulsivity and tested structural hypotheses from the Big Five Aspect Scales inventory.


Subject(s)
Individuality, Neural Networks (Computer), Personality Development, Personality Inventory/statistics & numerical data, Personality/classification, Adolescent, Adult, Aged, Aged 80 and over, Factor Analysis, Female, Follow-Up Studies, Humans, Impulsive Behavior, Male, Middle Aged, Psychological Models, Psychometrics, Young Adult
18.
Behav Res Methods ; 52(6): 2306-2323, 2020 12.
Article in English | MEDLINE | ID: mdl-32333330

ABSTRACT

Psychologists use scales composed of multiple items to measure underlying constructs. Missing data on such scales often occur at the item level, whereas the model of interest to the researcher is at the composite (scale score) level. Existing analytic approaches cannot easily accommodate item-level missing data when models involve composites. A very common practice in psychology is to average all available items to produce scale scores. This approach, referred to as available-case maximum likelihood (ACML), may produce biased parameter estimates. Another approach researchers use to deal with item-level missing data is scale-level full information maximum likelihood (SL-FIML), which treats the whole scale as missing if any item is missing. SL-FIML is inefficient and it may also exhibit bias. Multiple imputation (MI) produces the correct results using a simulation-based approach. We study a new analytic alternative for item-level missingness, called two-stage maximum likelihood (TSML; Savalei & Rhemtulla, Journal of Educational and Behavioral Statistics, 42(4), 405-431, 2017). The original work showed the method outperforming ACML and SL-FIML in structural equation models with parcels. The current simulation study examined the performance of ACML, SL-FIML, MI, and TSML in the context of univariate regression. We demonstrated performance issues encountered by ACML and SL-FIML when estimating regression coefficients, under both MCAR and MAR conditions. Aside from convergence issues with small sample sizes and high missingness, TSML performed similarly to MI in all conditions, showing negligible bias, high efficiency, and good coverage. This fast analytic approach is therefore recommended whenever it achieves convergence. R code and a Shiny app to perform TSML are provided.
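
The two traditional composite-scoring strategies contrasted above can be sketched in a few lines of pandas with toy items; TSML and MI themselves require dedicated routines, so only the scoring contrast is shown.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
n = 10
items = pd.DataFrame(rng.normal(size=(n, 4)), columns=["i1", "i2", "i3", "i4"])
items.iloc[rng.random(n) < 0.3, 2] = np.nan   # item-level missingness on i3

# ACML-style scoring: average whatever items are available for each person.
score_acml = items.mean(axis=1, skipna=True)

# SL-FIML-style scoring: the composite is missing if ANY item is missing,
# so those rows drop out of a composite-level analysis.
score_all_or_nothing = items.mean(axis=1, skipna=False)

print(pd.DataFrame({"ACML": score_acml, "all_or_nothing": score_all_or_nothing}))
```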


Subject(s)
Research Design, Bias, Statistical Data Interpretation, Humans, Likelihood Functions, Sample Size
19.
Psychol Methods ; 25(1): 30-45, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31169371

ABSTRACT

Previous research and methodological advice has focused on the importance of accounting for measurement error in psychological data. That perspective assumes that psychological variables conform to a common factor model. We explore what happens when data that are not generated from a common factor model are nonetheless modeled as reflecting a common factor. Through a series of hypothetical examples and an empirical reanalysis, we show that when a common factor model is misused, structural parameter estimates that indicate the relations among psychological constructs can be severely biased. Moreover, this bias can arise even when model fit is perfect. In some situations, composite models perform better than common factor models. These demonstrations point to a need for models to be justified on substantive, theoretical bases in addition to statistical ones. (PsycINFO Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Statistical Data Interpretation, Statistical Models, Psychology/methods, Psychology/standards, Research Design/standards, Humans
20.
Nat Hum Behav ; 3(5): 513-525, 2019 05.
Article in English | MEDLINE | ID: mdl-30962613

ABSTRACT

Genetic correlations estimated from genome-wide association studies (GWASs) reveal pervasive pleiotropy across a wide variety of phenotypes. We introduce genomic structural equation modelling (genomic SEM): a multivariate method for analysing the joint genetic architecture of complex traits. Genomic SEM synthesizes genetic correlations and single-nucleotide polymorphism heritabilities inferred from GWAS summary statistics of individual traits from samples with varying and unknown degrees of overlap. Genomic SEM can be used to model multivariate genetic associations among phenotypes, identify variants with effects on general dimensions of cross-trait liability, calculate more predictive polygenic scores and identify loci that cause divergence between traits. We demonstrate several applications of genomic SEM, including a joint analysis of summary statistics from five psychiatric traits. We identify 27 independent single-nucleotide polymorphisms not previously identified in the contributing univariate GWASs. Polygenic scores from genomic SEM consistently outperform those from univariate GWASs. Genomic SEM is flexible and open ended, and allows for continuous innovation in multivariate genetic analysis.


Subject(s)
Genome-Wide Association Study/statistics & numerical data, Genomics/methods, Latent Class Analysis, Multifactorial Inheritance/genetics, Factor Analysis, Humans, Mental Disorders/genetics, Mental Disorders/physiopathology, Multivariate Analysis, Single Nucleotide Polymorphism